Improve and optimize semi-broadcast behavior #921

Open · wants to merge 3 commits into master

Conversation

AmyrAhmady (Member)
Improve and optimize semi-broadcast behavior by creating and sending a receiver list to RakServer::Send and RakServer::RPC, instead of calling them manually for each player when we want to broadcast to a list of players, such as broadcasting sync packets to streamed players.

This change noticeably improves both memory usage and performance (especially on crowded servers), because it removes the need to allocate a packet for every single player you broadcast data to, whether through packets or RPCs.
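
As a rough illustration of the difference (all names below are invented for the sketch, not the actual open.mp or RakNet API), the old path pays a per-receiver allocation inside the network layer, while the new path handles the payload once and hands a receiver list to a single call:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

using PlayerIndex = uint16_t;

struct Packet
{
    std::vector<uint8_t> data; // serialized sync data
};

// Stand-in for the old per-player send: every call would copy/allocate
// the packet again for that single receiver.
void sendToPlayer(const Packet& p, PlayerIndex id)
{
    std::printf("send %zu bytes to player %u\n", p.data.size(), static_cast<unsigned>(id));
}

// Stand-in for the new list-based send: the payload is handled once and
// the receiver list tells the network layer who gets it.
void sendToList(const Packet& p, const PlayerIndex* list, std::size_t count)
{
    (void)list; // the receivers would be iterated internally
    std::printf("send %zu bytes to %zu players in one call\n", p.data.size(), count);
}

int main()
{
    Packet sync{ { 0x01, 0x02, 0x03 } };
    std::vector<PlayerIndex> streamed{ 2, 5, 7 };

    // Old behavior: N calls, N internal packet allocations.
    for (PlayerIndex id : streamed)
        sendToPlayer(sync, id);

    // New behavior: one call, one allocation.
    sendToList(sync, streamed.data(), streamed.size());
}
```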

Y-Less (Collaborator) commented Apr 28, 2024

Is there some way to move the getNetworks code into PacketHelper? The need to add ICore to several components just so they can call core.getNetworks feels like a very leaky abstraction. It is only needed for one of the PacketHelper methods, not the others, which could raise questions from an API user's point of view about why that one and not the others (obviously there is a reason, but it feels a bit arbitrary). It also slightly diminishes the "Helper" part of the name, as users need to do more work.

AmyrAhmady (Member, Author)

  1. Adding an ICore instance only happens in one file, the actor entity.
    For the rest of them I'm not adding a class variable to keep an instance of core; I'm just using the one that is already available.

  2. PacketHelper is a completely standalone set of functions that take arguments to work with; it lives not in the main code base but in the shared project (network). So it works somewhat like an API, used from outside to perform an action. With the current implementation (or any other that anyone else might have in mind, because it's simply not possible otherwise) you need a way to broadcast a specific set of data to every player, which eventually ends up calling INetwork broadcast functions. I can't think of any other way for those PacketHelper functions to stay the way they are and still be able to call functions on a network instance created and stored somewhere else (see the sketch below).

I'm open to ideas anyway, if anyone thinks there's a better way of doing it, given that the PacketHelper functions require access to INetwork regardless.
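
For reference, here is the rough shape being described, with invented types and signatures (not the real open.mp SDK): a standalone helper can only reach the network instances it is handed, so some caller has to supply them, e.g. via core.getNetworks():

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using PlayerIndex = uint16_t;

// Stand-in for the network interface; the real INetwork differs.
struct INetwork
{
    virtual void broadcast(const uint8_t* data, std::size_t len,
        const PlayerIndex* receivers, std::size_t count) = 0;
    virtual ~INetwork() = default;
};

namespace PacketHelper
{
    // Standalone free function: no stored ICore or network state. Everything
    // arrives through parameters, which is why the caller must obtain the
    // network list (e.g. from core.getNetworks()) and pass it in.
    inline void broadcastToList(const std::vector<INetwork*>& networks,
        const uint8_t* data, std::size_t len,
        const PlayerIndex* receivers, std::size_t count)
    {
        for (INetwork* net : networks)
            net->broadcast(data, len, receivers, count);
    }
}
```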

Y-Less (Collaborator) commented Apr 28, 2024

No, I'm not sure what the best solution would be. I'll have a little longer think, but it may well transpire that this is the best available method.

AmyrAhmady (Member, Author)

Explaining my last commit:

The change in the open.mp server project itself is there because I wasn't checking whether broadcastListSize is 0; when it is, we shouldn't call RakServer::Send (or ::RPC) at all. This can happen when a player is alone in the server, so a broadcastList with a size of 1 is created and passed to the RakNet functions without any effect or reason. Now it checks for the size being 0, and deallocates the list right away when it is not sent and handled by RakNet.
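
In other words, the guard looks something like this (sketched with invented names, not the actual server code):

```cpp
#include <cstddef>
#include <cstdint>

using PlayerIndex = uint16_t;

// Stand-in for RakServer::Send; in the PR, the real call takes ownership
// of the list and frees it after the buffered command is processed.
void rakServerSend(const uint8_t* data, std::size_t len,
    PlayerIndex* list, std::size_t count)
{
    (void)data; (void)len; (void)list; (void)count; // stub for the sketch
}

void broadcastSync(const uint8_t* data, std::size_t len,
    PlayerIndex* broadcastList, std::size_t broadcastListSize)
{
    if (broadcastListSize == 0)
    {
        // A lone player yields an empty receiver list; sending would be a
        // no-op, so free the list here instead of handing it to RakNet.
        delete[] broadcastList;
        return;
    }

    // Non-empty list: RakNet takes it and deallocates it later itself.
    rakServerSend(data, len, broadcastList, broadcastListSize);
}
```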

The changes in RakNet, on the other hand, basically get rid of the Span usage; instead it's just a pointer to an array of RakNet::PlayerIndex and an integer giving the array size. They also add a new BufferCommand type, BCS_SEND_TO_LIST, so we handle broadcasting to specific lists differently compared to BCS_SEND, and later deallocate the list.
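
Roughly, as a sketch (field names and handling are guesses based on the description above, not RakNet's actual internals):

```cpp
#include <cstdint>

using PlayerIndex = uint16_t;

struct BufferCommand
{
    enum Type
    {
        BCS_SEND,         // existing path: single target / exclusion broadcast
        BCS_SEND_TO_LIST, // new path: explicit receiver list
    } command;

    uint8_t* data;
    int dataLength;

    // Replaces the previous Span: a raw pointer plus an explicit count.
    PlayerIndex* receiverList;
    int receiverCount;
};

// In the network thread, the new command type gets its own branch, and the
// list is freed once the payload has been queued for every receiver.
void processCommand(BufferCommand& bcs)
{
    if (bcs.command == BufferCommand::BCS_SEND_TO_LIST)
    {
        for (int i = 0; i < bcs.receiverCount; ++i)
        {
            // ... queue bcs.data for player bcs.receiverList[i] ...
        }
        delete[] bcs.receiverList; // the command owns the list; free it here
        bcs.receiverList = nullptr;
        bcs.receiverCount = 0;
        return;
    }
    // BCS_SEND and the other command types keep their existing handling.
}
```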

Y-Less (Collaborator) commented Apr 29, 2024

For reference, the code/concept itself seems fine to me. I was just wondering if there was a very slightly better way to do one part, but it isn't a blocker by any means.
